This application paper presents a comprehensive experimental evaluation of the suitability of Topological Data Analysis (TDA) for the quantitative comparison of turbulent flows. Specifically, our study documents the usage of the persistence diagram of the maxima of flow enstrophy (an established vorticity indicator) as a topological representation of 180 ensemble members, generated by a coarse sampling of the parameter space of five numerical solvers. We document five main hypotheses reported by domain experts, describing their expectations regarding the variability of the flows generated by the distinct solver configurations. We contribute three evaluation protocols to assess the validation of the above hypotheses by two comparison measures: (i) a standard distance used in scientific imaging (the L2 norm) and (ii) an established topological distance between persistence diagrams (the L2-Wasserstein metric). Extensive experiments on the input ensemble demonstrate the superiority of the topological distance (ii), which reports as similar the flows that domain experts expect to be similar, due to the configuration of their vortices. Overall, the insights reported by our study bring experimental evidence of the suitability of TDA for representing and comparing turbulent flows, thereby providing the fluid dynamics community with confidence for its usage in future work. Moreover, our flow data and evaluation protocols provide the TDA community with an application-approved benchmark for the evaluation and design of further topological distances.
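To make the comparison measure (ii) concrete, below is a minimal, illustrative sketch of the L2-Wasserstein distance between two persistence diagrams, computed via optimal assignment with diagonal projections. It is not the paper's implementation; the use of scipy's assignment solver and the synthetic diagram values are assumptions.

```python
# A minimal sketch of the L2-Wasserstein distance between two persistence
# diagrams. Each point of a diagram may be matched either to a point of the
# other diagram or to its projection onto the diagonal; the distance is the
# square root of the optimal total squared matching cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

def l2_wasserstein(diag_a: np.ndarray, diag_b: np.ndarray) -> float:
    """2-Wasserstein distance between diagrams of shape (n, 2) and (m, 2)."""
    def diag_proj(points):
        # Orthogonal projection of (birth, death) points onto the diagonal.
        mid = (points[:, 0] + points[:, 1]) / 2.0
        return np.stack([mid, mid], axis=1)

    # Augment each diagram with the diagonal projections of the other one,
    # so that every point may be matched to a point or to the diagonal.
    a = np.vstack([diag_a, diag_proj(diag_b)])
    b = np.vstack([diag_b, diag_proj(diag_a)])

    n, m = len(diag_a), len(diag_b)
    cost = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    cost[n:, m:] = 0.0  # matching two diagonal points together is free

    rows, cols = linear_sum_assignment(cost)
    return float(np.sqrt(cost[rows, cols].sum()))

# Example with two tiny synthetic diagrams of (birth, death) pairs.
d1 = np.array([[0.0, 1.0], [0.2, 0.9]])
d2 = np.array([[0.0, 1.1]])
print(l2_wasserstein(d1, d2))
```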
This paper presents a computational framework for the Principal Geodesic Analysis of merge trees (MT-PGA), a novel adaptation of the celebrated Principal Component Analysis (PCA) framework [87] to the Wasserstein metric space of merge trees [92]. We formulate MT-PGA computation as a constrained optimization problem, aiming at adjusting a basis of orthogonal geodesic axes while minimizing a fitting energy. We introduce an efficient, iterative algorithm which exploits shared-memory parallelism, as well as an analytic expression of the fitting energy gradient, to ensure fast iterations. Our approach also trivially extends to extremum persistence diagrams. Extensive experiments on public ensembles demonstrate the efficiency of our approach, with MT-PGA computations in the orders of minutes for the largest examples. We show the utility of our contributions by extending to merge trees two typical PCA applications. First, we apply MT-PGA to data reduction and reliably compress merge trees by concisely representing them with their first coordinates in the MT-PGA basis. Second, we present a dimensionality reduction framework exploiting the first two directions of the MT-PGA basis to generate two-dimensional layouts of the ensemble. We augment these layouts with persistence correlation views, enabling global and local visual inspection of the feature variability in the ensemble. In both applications, quantitative experiments assess the relevance of our framework. Finally, we provide a lightweight C++ implementation that can be used to reproduce our results.
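For readers unfamiliar with the Euclidean construction that MT-PGA transposes to merge trees, the sketch below recalls the classical PCA workflow: basis fitting, compression by the first coordinates, and 2D layouts from the first two directions. It only illustrates the flat analogue; the merge-tree algorithm itself operates in the Wasserstein metric space and fits its basis iteratively.

```python
# A minimal sketch of the classical (Euclidean) PCA workflow that MT-PGA
# adapts to merge trees: fit an orthogonal basis, compress inputs to their
# first coordinates, and lay out the ensemble in 2D.
import numpy as np

def pca_basis(X: np.ndarray, n_axes: int):
    """Fit n_axes orthogonal axes minimizing the (Euclidean) fitting energy."""
    mean = X.mean(axis=0)
    # SVD of the centered data yields the orthogonal principal directions.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_axes]

def compress(X, mean, basis):
    """Data reduction: keep only the coordinates in the fitted basis."""
    return (X - mean) @ basis.T

def reconstruct(coords, mean, basis):
    """Approximate the inputs from their basis coordinates."""
    return mean + coords @ basis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))        # stand-in for vectorized ensemble members
mean, basis = pca_basis(X, n_axes=2)
coords = compress(X, mean, basis)     # 2D layout of the ensemble
err = np.linalg.norm(X - reconstruct(coords, mean, basis))
print(coords.shape, err)
```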
This paper introduces an efficient algorithm for persistence diagram computation, given an input piecewise linear scalar field f defined on a d-dimensional simplicial complex K, with $d \leq 3$. Our approach extends the seminal "PairCells" algorithm by introducing three main accelerations. First, we express the algorithm within the setting of discrete Morse theory, which considerably reduces the number of input simplices to consider. Second, we introduce a stratification approach to the problem, which we call "sandwiching". Specifically, the minima-saddle persistence pairs ($D_0(f)$) and the saddle-maximum persistence pairs ($D_{d-1}(f)$) are efficiently computed by processing, with a Union-Find data structure, the unstable sets of 1-saddles and the stable sets of (d-1)-saddles, respectively. This fast processing of dimensions 0 and (d-1) further, and drastically, reduces the number of critical simplices to consider for the computation of $D_1(f)$, the intermediate layer of the sandwich. Third, we document several performance improvements via shared-memory parallelism. We provide an open-source implementation of our algorithm for reproducibility purposes. We also contribute a reproducible benchmark package, which exploits three-dimensional data from a public repository and compares our algorithm to a variety of publicly available implementations. Extensive experiments indicate that our algorithm improves the time performance of the seminal "PairCells" algorithm it extends by two orders of magnitude. Moreover, it also improves memory footprint and time performance over a selection of 14 competing approaches, with a substantial gain over the fastest available approaches, while producing strictly identical outputs. We illustrate the utility of our contributions with an application to the fast and robust extraction of persistent 1-dimensional generators on surfaces, volume data, and high-dimensional point clouds.
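As an illustration of the Union-Find processing used for the outer layers of the sandwich, the sketch below computes minima-saddle pairs of a scalar field on a graph with the classical elder rule. It is a didactic reimplementation under simplifying assumptions (no discrete Morse setting, no parallelism, arbitrary tie-breaking), not the paper's code.

```python
# A minimal sketch of 0-dimensional persistence (minima-saddle pairs, D_0(f))
# via Union-Find: sweep edges by increasing filtration value; each merge of
# two components kills the one born at the higher minimum (elder rule).
def persistence_d0(values, edges):
    """values[i]: scalar value of vertex i; edges: iterable of (u, v) pairs."""
    n = len(values)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    pairs = []
    # Process edges by increasing upper endpoint value (lower-star filtration).
    for u, v in sorted(edges, key=lambda e: max(values[e[0]], values[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # no merge: the edge creates a cycle, handled in D_1(f)
        # Elder rule: the component rooted at the higher minimum dies here.
        if values[ru] < values[rv]:
            ru, rv = rv, ru
        death = max(values[u], values[v])
        if death > values[ru]:  # skip zero-persistence pairs
            pairs.append((values[ru], death))  # (birth at minimum, death at saddle)
        parent[ru] = rv
    return pairs  # the global minimum never dies (infinite pair omitted)

vals = [0.0, 2.0, 0.5, 3.0, 0.2]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(persistence_d0(vals, edges))  # [(0.5, 2.0), (0.2, 3.0)]
```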
In this paper, we address the problem of multimodal emotion recognition from multiple physiological signals. We demonstrate that a Transformer-based approach is suitable for this task. In addition, we present how such models may be pretrained in a multimodal scenario to improve emotion recognition performance. We evaluate the benefits of using multimodal inputs and pretraining with our approach on a state-of-the-art dataset.
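As an illustration of how a Transformer can fuse several physiological modalities, the sketch below tokenizes each signal separately and lets self-attention operate across modalities. The architecture, dimensions, and names are assumptions for exposition, not the authors' exact model.

```python
# An illustrative multimodal Transformer: per-modality linear tokenizers
# project each signal into a shared space, a Transformer encoder attends
# across all modality tokens, and a pooled head outputs emotion logits.
import torch
import torch.nn as nn

class MultimodalTransformer(nn.Module):
    def __init__(self, n_modalities=2, feat_dim=32, d_model=64, n_classes=4):
        super().__init__()
        # One linear tokenizer per modality, mapping raw features to d_model.
        self.proj = nn.ModuleList(
            [nn.Linear(feat_dim, d_model) for _ in range(n_modalities)])
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, signals):  # signals: list of (batch, seq, feat_dim)
        tokens = torch.cat([p(s) for p, s in zip(self.proj, signals)], dim=1)
        encoded = self.encoder(tokens)          # cross-modal self-attention
        return self.head(encoded.mean(dim=1))   # pooled emotion logits

model = MultimodalTransformer()
eda = torch.randn(8, 20, 32)   # e.g., 8 EDA windows, 20 steps, 32 features
ecg = torch.randn(8, 20, 32)   # e.g., matching ECG windows
logits = model([eda, ecg])     # (8, 4)
```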
Contrastive representation learning has proven to be an effective self-supervised learning method for images and videos. Most successful approaches are based on Noise Contrastive Estimation (NCE) and use different views of an instance as positives that should be contrasted with other instances, called negatives, that are considered as noise. However, several instances in a dataset are drawn from the same distribution and share underlying semantic information. A good data representation should contain the relations between instances, or semantic similarity and dissimilarity, which contrastive learning harms by treating all negatives as noise. To circumvent this issue, we propose a novel formulation of contrastive learning using semantic similarity between instances, called Similarity Contrastive Estimation (SCE). Our training objective is a soft contrastive one that brings the positives closer and estimates a continuous distribution to push or pull negative instances based on their learned similarities. We empirically validate our approach on both image and video representation learning. We show that SCE performs competitively with the state of the art on the ImageNet linear evaluation protocol for fewer pretraining epochs and that it generalizes to several downstream image tasks. We also show that SCE reaches state-of-the-art results for pretraining video representations and that the learned representation can generalize to video downstream tasks.
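The sketch below illustrates the soft contrastive objective described above: hard one-hot targets are mixed with a similarity distribution estimated from a second view, so negatives are pushed or pulled according to their learned similarity. Temperatures, the mixing coefficient, and tensor names are illustrative assumptions, not the paper's exact formulation.

```python
# A hedged sketch of a soft contrastive (SCE-style) objective: the target
# distribution blends one-hot positives with similarities estimated from a
# second view, and the loss is a soft cross-entropy over instances.
import torch
import torch.nn.functional as F

def sce_loss(z1, z2, lam=0.5, tau=0.1, tau_t=0.07):
    """z1, z2: L2-normalized embeddings of two views, shape (batch, dim)."""
    batch = z1.shape[0]
    # Similarities of view 1 against all instances of view 2.
    logits = z1 @ z2.T / tau
    with torch.no_grad():
        # Relation distribution estimated from the second view, with the
        # self-similarity masked so it covers only other instances.
        sim = z2 @ z2.T / tau_t
        sim.fill_diagonal_(float("-inf"))
        soft = F.softmax(sim, dim=1)
        one_hot = torch.eye(batch, device=z1.device)
        target = lam * one_hot + (1.0 - lam) * soft
    # Soft cross-entropy: pull positives, push/pull negatives by similarity.
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

z1 = F.normalize(torch.randn(16, 128), dim=1)
z2 = F.normalize(torch.randn(16, 128), dim=1)
print(sce_loss(z1, z2))
```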
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. Only 37% of the participants performed k-fold cross-validation on the training set, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Information spread on networks can be efficiently modeled by considering three features: documents' content, time of publication relative to other publications, and position of the spreader in the network. Most previous works model at most two of those jointly, or rely on heavily parametric approaches. Building on the recent Dirichlet-Point processes literature, we introduce the Houston (Hidden Online User-Topic Network) model, which jointly considers all those features in a non-parametric unsupervised framework. It infers dynamic topic-dependent underlying diffusion networks in a continuous-time setting along with said topics. It is unsupervised; it considers an unlabeled stream of triplets shaped as (time of publication, information content, spreading entity) as input data. Online inference is conducted using a sequential Monte-Carlo algorithm that scales linearly with the size of the dataset. Our approach yields substantial improvements over existing baselines on both cluster recovery and subnetwork inference tasks.
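The skeleton below illustrates the general shape of a sequential Monte-Carlo loop over a stream of (time, content, spreader) triplets: a single pass over the data, hence a cost linear in the dataset size. The model-specific state, proposal, and likelihood are left as placeholder callables; nothing here reflects the Houston model's actual update equations.

```python
# A generic sequential Monte-Carlo (particle filter) skeleton: reweight each
# particle by the likelihood of the next observation and resample when the
# effective sample size degenerates.
import numpy as np

def smc(stream, n_particles, init_state, propose, likelihood, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    particles = [init_state() for _ in range(n_particles)]
    weights = np.full(n_particles, 1.0 / n_particles)

    for triplet in stream:  # single pass: cost linear in the dataset size
        for i in range(n_particles):
            particles[i] = propose(particles[i], triplet, rng)
            weights[i] *= likelihood(particles[i], triplet)
        weights /= weights.sum()
        # Resample when the effective sample size drops below half.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = [particles[i] for i in idx]  # deep-copy mutable states
            weights[:] = 1.0 / n_particles
    return particles, weights

# Toy usage with scalar particle states and a dummy likelihood.
stream = [(0.5, "doc", "u1"), (1.2, "doc", "u2")]
parts, w = smc(stream, 50,
               init_state=lambda: 1.0,
               propose=lambda s, obs, rng: s + 0.1 * rng.normal(),
               likelihood=lambda s, obs: np.exp(-abs(s - obs[0])))
```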
The publication time of a document carries relevant information about its semantic content. The Dirichlet-Hawkes process has been proposed to jointly model textual information and publication dynamics. This approach has been used with success in several recent works, and extended to tackle specific challenging problems, typically short texts or entangled publication dynamics. However, the prior in its current form does not allow for complex publication dynamics. In particular, inferred topics are independent from each other: a publication about finance, for instance, is assumed to have no influence on publications about politics. In this work, we develop the Multivariate Powered Dirichlet-Hawkes Process (MPDHP), which alleviates this assumption. Publications about various topics can now influence each other. We detail and overcome the technical challenges that arise from considering interacting topics. We conduct a systematic evaluation of MPDHP on a range of synthetic datasets to define its application domain and limitations. Finally, we develop a use case of the MPDHP on Reddit data. At the end of this article, the interested reader will know how and when to use MPDHP, and when not to.
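The quantity that interacting topics affect can be illustrated with a standard multivariate Hawkes intensity, sketched below: the rate of topic i depends on past events of all topics through a cross-excitation matrix, so a publication about finance can raise the rate of publications about politics. The exponential kernel and parameter names are assumptions, not the MPDHP's actual prior.

```python
# A minimal multivariate Hawkes intensity: topic i's event rate is its base
# rate plus contributions from all past events, weighted by the
# cross-excitation matrix alpha and an exponential decay kernel.
import numpy as np

def intensity(i, t, history, mu, alpha, beta=1.0):
    """Rate of topic i at time t.

    history: list of (event_time, topic) pairs with event_time < t
    mu:      base rates, shape (n_topics,)
    alpha:   alpha[i, j] = excitation of topic i by events of topic j
    """
    rate = mu[i]
    for t_k, j in history:
        rate += alpha[i, j] * beta * np.exp(-beta * (t - t_k))
    return rate

mu = np.array([0.2, 0.1])          # two topics
alpha = np.array([[0.3, 0.5],      # topic 0 strongly excited by topic 1
                  [0.0, 0.4]])     # topic 1 ignores topic 0
history = [(0.5, 1), (1.2, 0)]
print(intensity(0, 2.0, history, mu, alpha))
```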
Time series are the most prevalent form of input data for educational prediction tasks. The vast majority of research using time series data focuses on hand-crafted features, designed by experts for predictive performance and interpretability. However, extracting these features is labor-intensive for humans and computers. In this paper, we propose an approach that utilizes irregular multivariate time series modeling with graph neural networks to achieve comparable or better accuracy with raw time series clickstreams in comparison to hand-crafted features. Furthermore, we extend concept activation vectors for interpretability in raw time series models. We analyze these advances in the education domain, addressing the task of early student performance prediction for downstream targeted interventions and instructional support. Our experimental analysis on 23 MOOCs, with millions of combined interactions over six behavioral dimensions, shows that models designed with our approach can (i) beat state-of-the-art educational time series baselines with no feature extraction and (ii) provide interpretable insights for personalized interventions. Source code: https://github.com/epfl-ml4ed/ripple/.
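To illustrate the concept activation vector idea the paper extends to raw time series models, the sketch below fits a linear probe separating a model's hidden activations on concept examples from those on random examples; the probe's normal vector gives a direction whose alignment with prediction gradients measures the concept's influence. Variable names and the probe choice are assumptions, not the paper's code.

```python
# A hedged sketch of concept activation vectors (CAVs): a linear probe in
# activation space yields a concept direction, and the sign of the gradient's
# projection onto it measures how often the concept influences predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(acts_concept, acts_random):
    """acts_*: hidden activations, shape (n_examples, hidden_dim)."""
    X = np.vstack([acts_concept, acts_random])
    y = np.concatenate([np.ones(len(acts_concept)), np.zeros(len(acts_random))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    cav = probe.coef_[0]
    return cav / np.linalg.norm(cav)

def concept_sensitivity(gradients, cav):
    """Fraction of examples whose prediction gradient aligns with the CAV."""
    return float(np.mean(gradients @ cav > 0))

# Synthetic activations standing in for a time series model's hidden layer.
rng = np.random.default_rng(0)
cav = concept_activation_vector(rng.normal(1, 1, (50, 16)),
                                rng.normal(0, 1, (50, 16)))
print(concept_sensitivity(rng.normal(size=(100, 16)), cav))
```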
This paper reviews existing work in software engineering that applies statistical causal inference methods. These methods aim at estimating causal effects from observational data. The review covers 32 papers published between 2010 and 2022. Our results show that the application of statistical causal inference methods is relatively recent and that the corresponding research community remains relatively fragmented.